50 research outputs found

    Techniques and Patterns for Safe and Efficient Real-Time Middleware

    Over 90 percent of all microprocessors are now used for real-time and embedded applications. The behavior of these applications is often constrained by the physical world. It is therefore important to devise higher-level languages and middleware that meet conventional functional requirements and also dependably and productively enforce real-time constraints. Real-Time Java is emerging as a safe, real-time environment. In this thesis we use it as our experimentation platform; however, our findings are easily adapted to other similar platforms. This thesis provides the following contributions to the study of safe and efficient real-time middleware. First, it identifies potential bottlenecks and problems with respect to guaranteeing real-time performance in middleware. Second, it presents a series of techniques and patterns that allow the design and implementation of safe, predictable, and highly efficient real-time middleware. Third, it provides a set of architectural and design patterns that application developers can use when designing real-time systems. Finally, it provides a methodology for evaluating the merits and benefits of real-time middleware; empirical results obtained with that methodology are presented for the techniques introduced in this thesis. The methodology helps compare the performance and predictability of general real-time middleware platforms.
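    The evaluation methodology itself is not detailed in the abstract; as a rough illustration only, a predictability benchmark of this kind typically reduces to sampling end-to-end invocation latencies and inspecting their distribution. The minimal Java sketch below shows that idea; the invoke() method is a hypothetical stand-in for a middleware call and is not part of the thesis.

    import java.util.Arrays;

    // Minimal latency-sampling harness: record per-invocation latency and report
    // median, 99th-percentile, and worst-case values, since the gap between the
    // median and the maximum is a simple proxy for (un)predictability.
    public class LatencyProbe {
        private static final int WARMUP = 1_000;
        private static final int SAMPLES = 10_000;

        private static volatile long sink;               // prevents dead-code elimination

        // Hypothetical operation under test; replace with a real middleware invocation.
        private static void invoke() {
            long x = 0;
            for (int i = 0; i < 100; i++) x += i;        // trivial stand-in workload
            sink = x;
        }

        public static void main(String[] args) {
            for (int i = 0; i < WARMUP; i++) invoke();   // warm up JIT and caches

            long[] ns = new long[SAMPLES];
            for (int i = 0; i < SAMPLES; i++) {
                long t0 = System.nanoTime();
                invoke();
                ns[i] = System.nanoTime() - t0;
            }
            Arrays.sort(ns);

            System.out.printf("median = %d ns%n", ns[SAMPLES / 2]);
            System.out.printf("p99    = %d ns%n", ns[(int) (SAMPLES * 0.99)]);
            System.out.printf("max    = %d ns%n", ns[SAMPLES - 1]);
        }
    }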

    The Data Distribution Service – The Communication Middleware Fabric for Scalable and Extensible Systems-of-Systems

    During the past several decades, techniques and technologies have emerged to design and implement distributed systems effectively. A remaining challenge, however, is devising techniques and technologies that will help design and implement systems-of-systems (SoSs). SoSs present some unique challenges when compared to traditional systems, given their scale, heterogeneity, extensibility, and evolvability requirements.

    Managing the far-Edge: are today's centralized solutions a good fit?

    Edge computing has established itself as the foundation for next-generation mobile networks, IT infrastructure, and industrial systems, thanks to its promised low network latency, computation offloading, and data locality. These properties enable key use cases such as Industry 4.0, Vehicular Communication, and the Internet of Things. Today's implementations of Edge computing are based on extensions to available Cloud computing software tools. While this approach accelerates adoption, it hinders the deployment of the aforementioned use cases, which require an infrastructure far more decentralized than Cloud data centers, notably in the far-Edge of the network. In this context, this work aims to: (i) analyze the differences between Cloud and Edge infrastructures, (ii) analyze the architectures adopted by the most prominent open-source Edge computing solutions, and (iii) experimentally evaluate those solutions in terms of scalability and service instantiation time in a medium-size far-Edge system. Results show that mainstream Edge solutions require powerful centralized controllers and always-on connectivity, making them unsuitable for highly decentralized scenarios in the far-Edge, where stable and high-bandwidth links are not ubiquitous. This work has been partially funded by the H2020 collaborative Europe/Taiwan research project 5G-DIVE (grant no. 589881) and by the H2020 European collaborative research project DAEMON (grant no. 101017109).

    JUNO: A Framework for Reconciling Scheduling Disciplines

    Scheduling problems arise each time there is some form of resource contention. The problem addressed by scheduling disciplines is that of ordering access to contended resources. The ordering is typically based on (1) properties exposed by the entities that compete for the resource (such as a deadline), (2) external properties (such as the arrival order), or (3) a combination of both. In the literature there exist many different scheduling algorithms, each of which has certain properties and an associated application domain. All of these scheduling disciplines are based on the assumption that every entity competing for a resource is provided with the same collection of properties. This assumption makes sense in a closed environment; however, it makes interoperability difficult for systems that have different scheduling algorithms and in which competitors migrate from one system to another. This problem is becoming evident in distributed computing environments such as Object Request Brokers (ORBs), agent frameworks, and load-balancing systems, in which active components, which usually have QoS requirements, migrate across different endsystems. In such scenarios we cannot assume that all the endsystems will provide the same scheduling disciplines for all the resources that might be subject to scheduling. Even if they do, the lack of global knowledge would make interoperability hard. In general, it is desirable that the QoS requirements exposed by any of these active components be preserved and enforced even in the presence of heterogeneity and migration. This thesis presents JUNO, a framework for reconciling scheduling disciplines, to address that problem.
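    To make the reconciliation idea concrete, the Java sketch below is an illustrative toy, not the JUNO design itself: each competitor exposes whatever scheduling properties it happens to carry, and the discipline orders the queue by deadline when one is exposed, falling back to arrival order otherwise. The Competitor class and the EDF_THEN_FIFO rule are hypothetical names introduced here for illustration.

    import java.util.Comparator;
    import java.util.Map;
    import java.util.PriorityQueue;

    // Toy reconciliation of two disciplines: EDF when a deadline is exposed,
    // FIFO (arrival order) when it is not.
    public class ReconcilingScheduler {

        // A competitor for a contended resource, carrying an open-ended property map.
        static final class Competitor {
            final String name;
            final long arrivalOrder;                    // external property
            final Map<String, Long> properties;         // exposed properties, e.g. "deadline"

            Competitor(String name, long arrivalOrder, Map<String, Long> properties) {
                this.name = name;
                this.arrivalOrder = arrivalOrder;
                this.properties = properties;
            }
        }

        // Hypothetical reconciliation rule: earliest deadline first, ties and
        // missing deadlines resolved by arrival order.
        static final Comparator<Competitor> EDF_THEN_FIFO =
            Comparator.comparingLong((Competitor c) ->
                    c.properties.getOrDefault("deadline", Long.MAX_VALUE))
                .thenComparingLong(c -> c.arrivalOrder);

        public static void main(String[] args) {
            PriorityQueue<Competitor> queue = new PriorityQueue<>(EDF_THEN_FIFO);
            queue.add(new Competitor("a", 1, Map.of("deadline", 50L)));
            queue.add(new Competitor("b", 2, Map.of()));                 // no deadline exposed
            queue.add(new Competitor("c", 3, Map.of("deadline", 20L)));

            while (!queue.isEmpty()) {
                System.out.println("grant resource to " + queue.poll().name); // c, a, b
            }
        }
    }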

    Efficient Memory-Reference Checks for Real-time Java

    The scoped-memory feature is central to the Real-Time Specification for Java. It allows greater control over memory management, in particular the deallocation of objects without the use of a garbage collector. To preserve the safety of storage references associated with Java since its inception, the use of scoped memory is constrained by a set of rules in the specification. While a program's adherence to the rules can be partially checked at compile time, undecidability issues imply that some, perhaps many, checks must be performed at run time. Poor implementations of those run-time checks could adversely affect overall performance and predictability, the latter being a founding principle of the specification.
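    At their core, the reference rules reduce to a lifetime check on every reference store: an object may only refer to objects that live at least as long as it does. The plain-Java sketch below models that check conceptually; the Scope class and assignmentLegal method are illustrative names introduced here, not the specification's API or an actual implementation of it.

    // Conceptual model of the scoped-memory reference check: a store of a
    // reference is legal only if the referenced object lives in the same scope
    // or an enclosing (outer) scope, so the referent cannot be reclaimed while
    // the referring object is still alive.
    public class ScopeCheckSketch {

        // A scoped memory area in a simple parent/child nesting model.
        static final class Scope {
            final Scope parent;     // null models heap/immortal memory (always safe to reference)

            Scope(Scope parent) {
                this.parent = parent;
            }
        }

        // Would a field of an object allocated in 'from' be allowed to reference
        // an object allocated in 'to'?
        static boolean assignmentLegal(Scope from, Scope to) {
            // Walk outward from 'from'; legal iff 'to' is 'from' itself or one of its ancestors.
            for (Scope s = from; s != null; s = s.parent) {
                if (s == to) return true;
            }
            return to == null;      // references into heap/immortal memory are always allowed
        }

        public static void main(String[] args) {
            Scope heap = null;
            Scope outer = new Scope(heap);
            Scope inner = new Scope(outer);

            System.out.println(assignmentLegal(inner, outer)); // true: referent outlives referrer
            System.out.println(assignmentLegal(outer, inner)); // false: referrer would outlive referent
            System.out.println(assignmentLegal(outer, heap));  // true: heap/immortal never goes away
        }
    }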